Search Results: "madduck"

24 October 2011

Gregor Herrmann: managing $HOME

I'm not the first one who thought about managing their dotfiles & scripts in a (D)VCS in order to keep them in sync across different machines, & I haven't even re-invented the wheel for doing so. a short report on my experiences:

after looking around a bit I decided to try movein (info page with links to downloads & a README, blog post, git repo) because it looked like it would easily do what I'd like to do without getting in my way too much; which means e.g.: no symlink farm, & only one simple command to manage the whole setup.

movein is a simple shell script that allows one to set up & manage multiple modular repos tracking different sets of files quite easily, taking advantage of the powers of git & mr. & being written in shell means I was also able to make a few changes, which stew kindly accepted & applied to his tree already.

in fact I'm just starting to use it for real, but it looks good so far, & I think I'm going to stick with it for the time being.

2 minor observations:

14 October 2011

Martin F. Krafft: Deutsche Bahn frequent traveller: a joke!

It has been a while since I last ranted about the Deutsche Bahn, our national train service monopoly. Out of necessity, I've since become one of their frequent travellers. Together with the spiffy, silver card, I received a pamphlet, in which the advantages of frequent travellers are listed. When I compare those advantages to what was promised, I cannot help but notice quite a few differences to my disadvantage. Since I refused to believe that the Deutsche Bahn could be this stupid, I double-checked with the service hotline, and I now have the information to report:
  1. While the website promises free access to DB lounges "for two", the pamphlet clarifies this: free access to DB lounges for you and your partner, provided each of you owns a first-class, long-distance ticket. What they left out: Oh, you don't need the frequent traveller status for that, the ticket suffices. Or, put differently: the frequent traveller status does not give you any access to the lounges. False advertising, anyone?
  2. The pamphlet explains that there are special seat areas reserved for frequent travellers. However, one need not be a frequent traveller to use them. So should I expect people to prompt me to show my card or clear my seat? And should I be expected to prompt people to flash the card or leave? Not feasible, anyone?
  3. I am told that I get priority treatment at the counter, except there is only one counter (in Munich), usually with a line of people for 1st-class and frequent travellers. All other travellers get delegated to 12 counters by an efficient number system, which means one is better off picking a number and standing in line. The other day, a lady came and flashed her frequent traveller card, expecting people to make way, but obviously no one did. Did they actually think about this, anyone?
I don't need to go into detail on the other "benefits": they claim that there is special, reserved parking, but that's probably only on paper. They claim reductions in hotels and rental cars, but probably limited to availability; they claim exclusive events, but those are likely the ones no one goes to anyway. And they claim a service hotline, but it's a premium-rate number. So all in all, Deutsche Bahn have once again managed to disappoint. The frequent traveller card does not give any benefits. It rather makes me regret having spent so much money on this company. Scratch "frequent traveller", make it "repeat idiot". NP: Steven Wilson: Grace for Drowning

30 September 2011

Martin F. Krafft: Archiving web pages revisited

An e-mail by Andreas Schamanek had me revisit the topic of archiving web pages. Andreas pointed me to the MHT format, which bundles HTML pages and their dependencies into a plain text file using MIME. Internet Explorer apparently already handles this format, and UnMHT provides software for the other browsers. As Firefox 6 is not yet supported, I went to try the Mozilla Archive Format extension, which seems to do the same thing and works quite well; so well (at first sight) that I wanted to share it with you. NP: Tortoise: Standards

29 September 2011

Martin F. Krafft: Liable for other peoples' debts

Today, German politicians decided that Germany be liable for up to 211 billion Euros for the debts of other EU countries. Or, put differently: the politicians put the money of the current and future generations on the line for a country that lived way above its capabilities for too many years. When the EU was founded, it was explicitly stated that no country need ever be liable for any other. I understand that politics is hard, and letting Greece (and others) fall down might carry heavy, unforeseen consequences. Also, I understand that the Greek people are mostly innocent in all this and that the fault lies with their politicians and other corrupt entities in the nation. However, what's happening these days is beyond comprehension. If I were in a non-German EU country, I'd rejoice and continue making debts. It is likely that this won't be the last time that our politicians cave in to pressure from other nations who have a lesser understanding of budgeting and saving. Since I am German, I can only shake my head, look to Berlin and ask myself whether this is the final straw that broke the camel's back. How the heck do the people over there ever want to regain the trust of their people? Politics has become the game of pleasing each other; who cares about the people? And the German politicians are (once again, remember credit default swaps?) at the forefront of this stupidity. To me, there is only one solution to Greece's debts: make sure that what happened can never happen again, and then cut the debts, or slice them in half. Let the banks carry the weight, for it was they who gave out the loans too liberally. And if this forces a bank or two to default, let it happen, for fuck's sake. The consequences might be dire, but they'll subside. And that's surely better than trying to pretend that we can keep juggling this heavily inflated financial system. Instead, the executives, elected to carry the trust of the people, are setting precedents for other countries to follow Greece, for it is likely that they will be bailed out. By us. That is not the way to teach anyone the basics of economy: you can only spend as much as you earn, without exceptions. Debts will only come around to hurt you. I could puke. NP: Godspeed You Black Emperor!: F# A#

26 July 2011

Martin F. Krafft: Wisdom tooth left in Bosnia

For those who care or wonder: the reason why I hold a white icepack to my cheek here at Debconf11 in Banja Luka, Bosnia & Hercegovina, talk fairly little and try not to smile is that I had one of my wisdom teeth removed this morning by one of the local dentists. Some might cringe at the idea of submitting yourself to such a treatment in Bosnia, but I have to say that Doctor Saša Dabić did a splendid job, even though we weren't really able to communicate a lot. Still, 45 minutes after I entered the office, I saw my tooth on the table and was able to leave again. My tooth had been building up an infection for several weeks, and it was starting to become unbearable. Therefore I decided to simply bite the bullet, after having seen the x-rays and judging that it wouldn't be too hard to remove. It wasn't; the pain is now minor, the swelling mostly under control, the drugs are beer-compatible, and you all should just enjoy it while I cannot talk, for tomorrow I'll be back!

8 June 2011

Martin F. Krafft: World IPv6 Day: ask your provider now

Today is World IPv6 Day. Please take a moment to test your connectivity, and if you are not IPv6-enabled yet, then send an e-mail to your provider or hoster and ask them for native IPv6 connectivity on your uplink. Do it even if you do not know what I am talking about or you don't care. The reason is quite simply that we're already too late and hence should act without further delay. If IPv6 network effects do not pick up and adoption rates do not increase, the big players will drive up the prices for everyone. Then you will find yourself locked in and paying. Or you simply won't be able to address individual computers anymore but will always be forced to proxy via commercial providers and forced to say "how high" when they ask you to jump. Remember that they are commercial entities who might claim to act in the interest of their customers, but you are actually second to their profits. Here are some answers to frequently asked questions if you want to know more. PS: Google, having been so vocal about World IPv6 Day, I would have at least expected you to change your logo today! NP: Monkey3: 39 Laps
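A quick and dirty check from a shell could look like the following (just an example sketch; it assumes ping6 and curl are installed, and the target hosts are merely well-known IPv6-enabled sites):

    ping6 -c 3 ipv6.google.com                      # only reachable over IPv6
    curl -6 -sI http://www.kame.net/ | head -n 1    # -6 forces curl onto IPv6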

4 June 2011

Mike (stew) O'Connor: My Movein Script

Erich Schubert's blog post reminded me that I've been meaning to write up a post detailing how I'm keeping parts of my $HOME in git repositories. My goal has been to keep my home directory in a version control system effectively. I have a number of constraints, however. I want the system to be modular. I don't always need X-related config files in my home directory. Sometimes I want just my zsh-related files and my emacs-related files. I have multiple machines I check email from, and on those I want to keep my notmuch/offlineimap files in sync, but I don't need these on every machine I'm on, especially since those configurations have more sensitive data. I played around with laysvn for a while, but it never really seemed comfortable. I more recently discovered that madduck had started a "vcs-home" website and mailing list, talking about doing what I'm trying to do. I'm now going with madduck's idea of using git with detached work trees, so that I can have multiple git repositories all using $HOME as their $GIT_WORK_TREE. I have a script inspired by his vcsh script that will create a subshell where the GIT_DIR and GIT_WORK_TREE variables are set for me. I can do my git operations related to just one git repository in that shell, while still operating directly on my config files in $HOME, and avoiding any kind of nasty symlinking or hardlinking. Since I am usually using my script to allow me to quickly "move in" to a new host, I named my script "movein". It can be found here. Here's how I'll typically use it:
    stew@guppy:~$ movein init
    git server hostname? git.vireo.org
    path to remote repositories? [~/git] 
    Local repository directory? [~/.movein] 
    Location of .mrconfig file? [~/.mrconfig] 
    stew@guppy:~$ 
This is just run once. It asks me questions about how to set up the 'movein' environment. Now I should have a .moveinrc storing the answers I gave above, a stub of a .mrconfig, and an empty .movein directory. The next thing to do is to add some of my repositories. The one I typically add on all machines is my "shell" repository. It has a .bashrc and .zshrc, an .alias file that both source, and other zsh goodies I'll generally wish to have around:
    stew@guppy:~$ ls .zshrc
    ls: cannot access .zshrc: No such file or directory
    stew@guppy:~$ movein add shell
    Initialized empty Git repository in /home/stew/.movein/shell.git/
    remote: Counting objects: 42, done.
    remote: Compressing objects: 100% (39/39), done.
    remote: Total 42 (delta 18), reused 0 (delta 0)
    Unpacking objects: 100% (42/42), done.
    From ssh://git.vireo.org//home/stew/git/shell
     * [new branch]      master     -> origin/master
    stew@guppy:~$ ls .zshrc
    .zshrc
So what happened here is that the ssh://git.vireo.org/~/git/shell.git repository was cloned with GIT_WORK_TREE=~ and GIT_DIR=.movein/shell.git. My .zshrc (along with a bunch of other files) has appeared. Next perhaps I'll add my emacs config files:
    stew@guppy:~$ movein add emacs       
    Initialized empty Git repository in /home/stew/.movein/emacs.git/
    remote: Counting objects: 77, done.
    remote: Compressing objects: 100% (63/63), done.
    remote: Total 77 (delta 10), reused 0 (delta 0)
    Unpacking objects: 100% (77/77), done.
    From ssh://git.vireo.org//home/stew/git/emacs
     * [new branch]      emacs21    -> origin/emacs21
     * [new branch]      master     -> origin/master
    stew@guppy:~$ ls .emacs
    .emacs
    stew@guppy:~$ 
My remote repository has a master branch, but also has an emacs21 branch, which I can use when checking out on older machines which don't yet have newer versions of emacs. Let's say I have made changes to my .zshrc file, and I want to check them in. Since we are working with detached work trees, git can't immediately help us:
    stew@guppy:~$ git status
    fatal: Not a git repository (or any of the parent directories): .git
The movein script allows me to "login" to one of the repositories. It will create a subshell with GIT_WORK_TREE and GIT_DIR set. In that subshell, git operations operate as one might expect:
    stew@guppy:~ $ movein login shell
    stew@guppy:~ (shell:master>*) $ echo >> .zshrc
    stew@guppy:~ (shell:master>*) $ git add .zshrc                                       
    stew@guppy:~ (shell:master>*) $ git commit -m "adding a newline to the end of .zshrc"
    [master 81b7311] adding a newline to the end of .zshrc
     1 files changed, 1 insertions(+), 0 deletions(-)
    stew@guppy:~ (shell:master>*) $ git push
    Counting objects: 8, done.
    Delta compression using up to 2 threads.
    Compressing objects: 100% (6/6), done.
    Writing objects: 100% (6/6), 546 bytes, done.
    Total 6 (delta 4), reused 0 (delta 0)
    To ssh://git.vireo.org//home/stew/git/shell.git
       d24bf2d..81b7311  master -> master
    stew@guppy:~ (shell:master*) $ exit
    stew@guppy:~ $ 
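Conceptually, the "login" subshell is nothing more than those two variables plus a new shell; a simplified sketch (not the script's actual code) would be:

    # roughly what "movein login shell" does:
    GIT_DIR=$HOME/.movein/shell.git \
    GIT_WORK_TREE=$HOME \
        $SHELL
    # exiting the subshell drops the variables again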
If I want to create a brand new repository from files in my home directory, I can:
    stew@guppy:~ $ touch methere
    stew@guppy:~ $ touch mealsothere
    stew@guppy:~ $ movein new oohlala methere mealsothere
    Initialized empty Git repository in /home/stew/git/oohlala.git/
    Initialized empty Git repository in /home/stew/.movein/oohlala.git/
    [master (root-commit) 7abe5ba] initial checkin
     0 files changed, 0 insertions(+), 0 deletions(-)
     create mode 100644 mealsothere
     create mode 100644 methere
    Counting objects: 3, done.
    Delta compression using up to 2 threads.
    Compressing objects: 100% (2/2), done.
    Writing objects: 100% (3/3), 224 bytes, done.
    Total 3 (delta 0), reused 0 (delta 0)
    To ssh://git.vireo.org//home/stew/git/oohlala.git
     * [new branch]      master -> master
Above, the command movein new oohlala methere mealsothere says "create a new repository containing two files: methere, mealsothere". A bare repository is created on the remote machine, a repository is created in the .movein directory, the files are committed, and the new commit is pushed to the remote repository. Now on some other machine, I could run movein add oohlala to get these two new files. The movein script maintains a .mrconfig file, so that joeyh's mr tool can be used to manage the repositories in bulk. Commands like "mr update", "mr commit", "mr push" will act on all the known repositories. Here's an example:
    stew@guppy:~ $ cat .mrconfig
    [DEFAULT]
    include = cat /usr/share/mr/git-fake-bare
    [/home/stew/.movein/emacs.git]
    checkout = git_fake_bare_checkout 'ssh://git.vireo.org//home/stew/git/emacs.git' 'emacs.git' '../../'
    [/home/stew/.movein/shell.git]
    checkout = git_fake_bare_checkout 'ssh://git.vireo.org//home/stew/git/shell.git' 'shell.git' '../../'
    [/home/stew/.movein/oohlala.git]
    checkout = git_fake_bare_checkout 'ssh://git.vireo.org//home/stew/git/oohlala.git' 'oohlala.git' '../../'
    stew@guppy:~ $ mr update
    mr update: /home/stew//home/stew/.movein/emacs.git
    From ssh://git.vireo.org//home/stew/git/emacs
     * branch            master     -> FETCH_HEAD
    Already up-to-date.
    mr update: /home/stew//home/stew/.movein/oohlala.git
    From ssh://git.vireo.org//home/stew/git/oohlala
     * branch            master     -> FETCH_HEAD
    Already up-to-date.
    mr update: /home/stew//home/stew/.movein/shell.git
    From ssh://git.vireo.org//home/stew/git/shell
     * branch            master     -> FETCH_HEAD
    Already up-to-date.
    mr update: finished (3 ok)
    stew@guppy:~ $ mr update        
There are still issues I'd like to address. The big one in my mind is that there is no .gitignore. So when you "movein login somerepository" and then run "git status", it tells you about hundreds of untracked files in your home directory. Ideally, I just want to know about the files which are already associated with the repository I'm logged into.
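One possible workaround (not something movein does as described above) is git's status.showUntrackedFiles option, which can be set per repository so that git status keeps quiet about the rest of $HOME:

    # inside a "movein login somerepository" subshell:
    git config status.showUntrackedFiles no
    # "git status" then only reports changes to tracked files;
    # "git status -u" shows untracked files again when needed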

2 June 2011

Erich Schubert: Managing user configuration files

Dear Lazyweb,
How do you manage your user configuration files? I have around four home directories I frequently use. They are sufficiently in sync, but I have been considering actually using some file management to synchronize them better. I'm talking about files such as shell config, ssh config, .vimrc, etc.
I had some discussions about this before, and the consensus had been that some version control system probably is best. Git seemed to be a good candidate; I remember having read about things like this a dozen years ago when CVS was still common and Subversion was new.
So dear lazyweb, what are your experiences with managing your user configuration? What setup would you recommend?
Update: See vcs-home for various related links and at least five different ways of doing this. mr, a multi-repository VCS wrapper, seems particularly well suited to this.

28 January 2011

Martin F. Krafft: The Phoenix Foundation in Switzerland

We've known for a while and want to keep it no longer secret: New Zealand's famous band The Phoenix Foundation are in Europe at the moment, and will come to Switzerland on 17 and 18 February to play in Lausanne and Zurich. Penny went ecstatic when she found out and joined the street team, and we now have no excuses but to go to both shows. I am certainly looking forward to it. Even though I haven't really warmed up to their last two outputs (Buffalo and the Merry Kriskmass EP), their earlier stuff is heart-warming good-mood music that should put me back into chilled NZ summer mode. Choice! NP: The Phoenix Foundation: Buffalo

17 December 2010

Pietro Abate: debian git packaging with git upstream

Update: There is an easier method to do all this using gbp-clone, as described here. Ah! Then, to build the package, you just need to tell git-buildpackage where to find the pristine-tar:
git-buildpackage --git-upstream-branch=upstream/master
or you could simply describe the layout (as suggested) in debian/gbp.conf. Easy!
I've found a lot of different recipes and howtos about git debian packaging, but I failed to find one simple recipe to create a debian package from scratch when upstream is using git. Of course, the following is a big patchwork from many different sources. First we need to do a bit of administrative work to set up the repository:
mkdir yourpackage
cd yourpackage
git init --shared
Then, since I'm interested in tracking the upstream development branch, I'm going to add a remote to my repo:
git remote add upstream git://the.url/here.git
At this point I need to fetch upstream and create a branch for it.
git fetch upstream
git checkout -b upstream upstream/master
Now in my repo I have a master branch and an upstream branch. So far, so good. Let's add the debian branch based on master:
git checkout master
git checkout -b debian master
It's in the debian branch that I'm going to keep the debian-related files. I'm finally ready for hacking: git add / git commit / git rm ... When I'm done, I can switch to master, merge the debian branch into it, and use git-buildpackage to build the package.
git checkout master
git branch
debian
* master
upstream

git merge debian
git-buildpackage
Suppose I want to put everything on gitorious, for example. I'll create an account, set up my ssh pub key, and then I have to add an origin ref in my .git/config. Something like:
[remote "origin"]
url = git@gitorious.org:debian-stuff/yourpackage.git
fetch = +refs/heads/*:refs/remotes/origin/*
[branch "master"]
remote = origin
merge = refs/heads/master
The only thing left to do is to push everything to gitorious. The --all is important.
git push --all
People who want to pull your work from gitorious can follow this script:
$git clone git@gitorious.org:debian-stuff/yourpackage.git
$cd yourpackage
$git branch -a
* master
remotes/origin/HEAD -> origin/master
remotes/origin/debian
remotes/origin/master
remotes/origin/upstream
$git checkout -t origin/debian
$git checkout -t origin/upstream
$git branch -a
debian
master
* upstream
remotes/origin/HEAD -> origin/master
remotes/origin/debian
remotes/origin/master
remotes/origin/upstream
$git checkout master
$git-buildpackage
Maybe there is an easier way to pull all remote branches at once, but I'm not aware of it. Any better way?
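One possible answer (only a sketch, untested here): loop over the remote branches and create a local tracking branch for each, then check out whichever one you need as usual:

for b in $(git branch -r | grep -v ' -> ' | sed 's|^ *origin/||'); do
  git branch --track "$b" "origin/$b" 2>/dev/null || true
done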

15 November 2010

Manoj Srivastava: Manoj: Dear Lazyweb: How do you refresh or recreate your kvm virt periodically?

Dear Lazyweb, how do all y'all using virts recreate the build machine setup periodically? I have tried and failed to get the qemu-make-debian-root script to work for me. Going through and redoing it from a netinst ISO is an option, but then I need debconf preseeding files, and I was wondering if there are some out there. And then there is the whole "Oh, by the way, upgrade from Squeeze to Sid, please" step. The less sexy alternative is going to the master copy and running a cron job to safe-upgrade each week, and re-creating any copy-on-write children. Would probably work, but I am betting there are less hackish solutions out there. First, some background. It has been a year since I interviewed for the job I currently hold. And nearly 10 months since I have been really active in Debian (apart from Debconf 10). Partly it was trying to perform well at the new job, partly it was getting permissions to work on Debian from my employer. Now that I think I have a handle on the job, and the process for getting permissions is coming to a positive end, I am looking towards getting my Debian processes and infrastructure back up to snuff. Before the interregnum, I used to have a UML machine setup to do builds. It was generated from scratch weekly using cron, and ran SELinux strict mode, and I used to have an automated ssh-based script to build packages and dump them on my box to test them. I had local git porcelain to do all this and tag releases, in a nice, effortless work flow. Now, the glory days of UML are long gone, and all the cool kids are using KVM. I have set up a kvm box, using a netinst ISO (like the majority of the HOWTOs say). I used madduck's old /etc/network/interfaces setup to do networking using a public bridge (mostly because of how cool his solution was; virsh can talk natively to a bridge for us now) and I have NFS, SELinux, ssh, and my remote build infrastructure all done, so I am ready to hop back into the fray once the lawyers actually ink the agreements. All I have to do is decide on how to refresh my build machines periodically. And I guess I should set up virsh, instead of having a shell alias around kvm. Just haven't gotten around to that.

16 August 2010

Martin F. Krafft: Happy birthday Debian

Dear Debian: I haven't had much of a chance to stay in touch lately, but I don't want to forget to wish you well on this 17th birthday of yours. You have set standards and you continue to do so. You are the operating system of choice, and you excel at it. Keep up the level of quality, and keep up the spirit. I am looking forward to more contact in the future. Thanks to everyone who has dedicated and (or) continues to dedicate their time to our project! Love, -m

19 April 2010

Martin F. Krafft: Orangutans at the Nestlé shareholder meeting

Bravo Greenpeace Switzerland! At Nestlé's annual shareholder meeting 2010 last week, you descended from the ceiling in the middle of the presentations with flyers and a banner asking the company to take responsibility for their reckless actions in Indonesia. Thousands of square kilometres of forest are cleared every day so that companies like Nestlé can make vast sums of money off consumers. [Image: Orangutans asking Nestlé for a break] Meanwhile, orangutans outside the venue were protesting Nestlé and asking for a break (copying Nestlé's own slogan "Have a Break! Have a ..."). The orangutans are pushed towards extinction by capitalist interest. One of my closest friends was part of the act, and he recounts breaking into the ventilation system before sawing through the ceiling, and descending on a rope. The police detained them for more than 24 hours, but the message has been sent. Bravo! Read more (and watch videos of the spectacular descent) on the Greenpeace webpage, the Greenpeace press announcement and their blog (all in German), or on 24heures (in French). Planetsave has decent coverage in English. NP: Emerson, Lake & Palmer: Brain Salad Surgery

17 April 2010

Martin F. Krafft: Planes or volcano

Via internotes.ch: Planes produce way more CO₂ than the volcano NP: Emerson, Lake & Palmer: Tarkus

11 March 2010

Martin F. Krafft: Splitting puppetd from puppetmaster

My relationship with Puppet is one of love and hate. I am forced to use it simply because there is no better tool around, but I hate it in so many ways that I don't even want to start to enumerate them (hint: most have to do with Ruby, actually). Today I decided to put an end to one thing that has been driving me insane: the fact that puppetd (the client) and puppetmasterd (the server) use the same working directory, /var/lib/puppet. Since I consider (and would like to treat) the machine on which puppetmasterd is running as just another puppet client, I was running into funky issues related to SSL certificate confusion, obscure errors, and SSL revocation horrors. The following hence assumes that you have installed or are planning to install puppetd on the machine running your puppetmaster, and that you have two fully-qualified domain names for the machine. For instance, I run puppetmaster on vera.madduck.net, and puppetmaster.madduck.net is an alias for the same machine. I'll use these names in the following as examples. The following may be Debian-specific, as I am solely using the puppet and puppetmaster packages for my experimentation and verification. Your mileage may vary, but the concept shall be the same.
  1. Stop everything:
    /etc/init.d/puppetmaster stop
    /etc/init.d/puppet stop
    
    
    (also verify that you have not instructed cron to restart these services)
  2. Rename the working directory:
    mv /var/lib/puppet /var/lib/puppetmaster
    
    
    and amend /etc/puppet/puppet.conf accordingly:
    [main]
    # ...
    vardir=/var/lib/puppetmaster
    ssldir=$vardir/ssl
    # ...
    [puppetmasterd]
    certname=puppetmaster.madduck.net
    # ...
    
    
    I am doing this in [main], planning to override it for puppetd later, because puppetd is the only program which makes sense to be separated from the rest. Since only the puppetmaster needs a special certificate name, that is set specifically in the [puppetmasterd] section. If you use apache2 or nginx in front of your puppetmasters, make sure to amend the SSL file locations in the virtual host definition and restart (!) the service. You can verify that the configuration has been amended by making sure that there is no output from the following command:
    # puppetmasterd --genconfig | grep -q '/var/lib/puppet/' && echo SOMETHING IS WRONG
    
    
  3. Now restart puppetmaster:
    /etc/init.d/puppetmaster start
    
    
    and verify that it starts. If your puppetmaster previously ran under a different name, it will create itself a new certificate and sign it. Since the client will get its own working directory (and thus a new SSL certificate), you want to remove all records of the old certificate:
    # puppetca --list --all
    + puppetmaster.madduck.net
    + vera.madduck.net
    # puppetca --clean vera.madduck.net
    
    
  4. Change the configuration file to tell puppetd about its working directory:
    [puppetd]
    server=puppetmaster.madduck.net
    vardir=/var/lib/puppetmaster
    ssldir=$vardir/ssl
    # ...
    
    
    This you can verify with the following command, which should not print anything:
    # puppetd --genconfig | grep -q '/var/lib/puppet[^/]' && echo SOMETHING IS WRONG
    
    
  5. Now install puppet, or (re)start it if it's already installed:
    # /etc/init.d/puppet stop
    # puppetd --no-daemonize --onetime --verbose --waitforcert 30 &
    info: Creating a new SSL key for vera.madduck.net
    warning: peer certificate won't be verified in this SSL session
    info: Caching certificate for ca
    info: Creating a new SSL certificate request for vera.madduck.net
    # puppetca --list
    vera.madduck.net
    # puppetca --sign vera.madduck.net
    notice: Signed certificate request for vera.madduck.net
    notice: Removing file Puppet::SSL::CertificateRequest vera.madduck.net at '/var/lib/puppetmaster/ssl/ca/requests/vera.madduck.net.pem'
    # fg
    info: Caching certificate for vera.madduck.net
    info: Caching certificate_revocation_list for ca
    [...]
    # puppetca --list --all
    + puppetmaster.madduck.net
    + vera.madduck.net
    # /etc/init.d/puppet start
    
    
    Do yourself the favour and check that it's all working.
  6. Optionally, you can now clean up the client stuff in the server's working directory, for instance like this (it worked for me, but this is the sledgehammer approach):
    # /etc/init.d/puppetmaster stop
    # cd /var/lib/puppetmaster
    # tar -cf /tmp/puppetmaster.workingdir-backup.tar .
    # find ../puppet -type f -printf '%P\n' | xargs rm
    # /etc/init.d/puppetmaster start
    
    
  7. If you stopped cron before (and your puppet recipes have not since restarted it):
    /etc/init.d/cron start
    
    
All done. I wish puppet, or at least Debian's puppet packages, would do this by default. Please let me know if the above conversion works for you. Then I might start working on an automated migration. NP: Genesis: Selling England by the Pound

26 February 2010

Martin F. Krafft: ACTA leak: no surprises about transparency blockers

The most common criticism of the Anti-Counterfeiting Trade Agreement (ACTA) is the lack of transparency. Before the nations disclose the terms of the agreement under negotiation, we are unable to gain an idea of the big picture, let alone voice our opinions and push for changes. Our politicians don't want us to know. We rely on leaked documents for our information. This is backwards in a world where a state should represent its people. This smells foul to me. There are undoubtedly some good reasons for the treaty, and if we can contain worldwide, large-scale trade of counterfeited goods and medicine, then that would be a net benefit to us all. However, we must not allow certain governments to succumb to the pressure of (commercially-motivated) lobbyists, to extend that pressure onto other nations using trade as a means of pressure, and to slash our freedom as if it were an inconvenient obstacle in their way. Only if the terms under negotiation become publicly available, and the public is given a voice, can we help our governments enter an agreement that is in the interest of their people, rather than a threat to us. It is hardly surprising that the total capitalist nation USA is the strongest opponent of transparency, because the public might delay or even prevent the treaty. I was also not surprised to see South Korea and Germany in the list of supporters of secrecy. It is interesting to see that the leaders of Singapore, Belgium, Portugal, and Denmark also seem to believe that these negotiations should be withheld from the public. Does anyone know about Switzerland? I tip my hat to New Zealand, Canada, Australia, the Netherlands, Sweden, Finland, Ireland, Hungary, Poland, Estonia, and Austria for their support of transparency.

22 February 2010

Russell Coker: Net Neutrality

Martin Krafft advocates a model of Internet access where advertisers pay for the Internet connection [1]. The first problem with this idea is the base cost of providing net access, which in most cases is wires to the premises. Every service that involves a cable to someone's house (Cable TV, Cable/DSL net access, or phone service) has a base minimum monthly fee associated with the expense of installing and maintaining the cable and whatever hardware is at the other end of the cable. For DSL and basic phone service the pair of wires ends at an exchange and takes a share of the DSLAM or a telephone exchange. It seems that the minimum monthly cost for any wire that goes to the house in Australia is about $20. So if an advertiser makes $0.20 per click (which I believe to be higher than the average price for Google Adsense clicks) then the user would have to be clicking on adverts more than 3 times a day (roughly 100 clicks a month to cover the $20). This might be viable if an ISP runs a proxy that inserts adverts into all content (which technically shouldn't be that difficult). But modifying content in-transit to introduce adverts is something that the net-discrimination crowd can only dream about. 3G net access has the lowest per-user costs. Based on current data costs it seems possible for an ISP to run a 3G service with bills for users as low as $15 per annum if they don't transfer much data. Recouping $15 might be easy, but it's also a small enough amount of money that most users won't mind paying it. What we really need is to have more competition in the 3G ISP business. When I investigated this issue last month I found that there are few 3G Internet providers in Australia and the cheapest is Dodo at $139 per annum with a 15G limit [2]. With a bit more competition I'm sure that someone would offer a really cheap plan for 1G or 2G of data access in a year. Martin complains about users paying twice, as "users pay to access the network (which is like paying a taxi to get to the market), so that they can visit sites where advertisers make money showing ads to the visitor". But if the advertisers were to pay, then there would be a lot of inefficiency in determining how much each advertiser should pay, which would result in extra expenses, and therefore providing the service would cost more. I don't think that paying for a taxi to get to the market is a bad thing; personally I prefer to save money and use a bus or tram. I think that the best analogy for this is comparing using your own choice of a bus, tram, taxi, etc. to get to the market with having the market operator provide taxis for everyone and then make everything really expensive to cover the significant costs of doing so. Finally there is the issue of video transfer, which uses up a lot of bandwidth. According to both industry rumor and traceroute there is a significant mirror of youtube content in Australia. This means that youtube downloads will be cheap local transfers, not expensive international transfers. I expect that most multi-national CDNs have nodes in Australia. So for Australia at least video transfer would not be as expensive as many people expect. I think that to a large extent the concept of having content providers pay to host the content has been tried and found to fail. The Internet model of "you pay for your net access, I pay for mine, and then we transfer data between our computers without limit" has killed off all the closed services. Not only do I think that net-discrimination is a bad idea, I think that it would also turn out to be bad for business.

20 February 2010

Martin F. Krafft: Charge advertisers for the last mile

ISPs fight a raging war over net neutrality because their infrastructure cannot keep up with the increasing demand (or rather supply) of content. Therefore, ISPs want to charge the users premiums if they wish to use certain services on the Net. For instance, since videos are usually large in size, one would have to purchase e.g. the platinum package to be able to access video hosting sites. It would be a serious loss of freedom if they won, and the Internet would never be the same. Let's turn that idea around: since sites that use advertising make money off every visitor, they are really the ones that should pay the ISPs so that they can improve their infrastructure. The same applies to sites that make money off visitors in other ways. At the moment, users pay to access the network (which is like paying a taxi to get to the market), so that they can visit sites where advertisers make money showing ads to the visitor, which might actually lead them to pay a manufacturer for a product: the end user pays twice, and the advertisers take in money, leeching off the ISPs' investment into their infrastructure. I think that the advertiser, and not the consumer, should pay the ISP to keep the infrastructure afloat, or even improve it. The manufacturer should then pay the advertisers for displaying the ad, and the user consumes if s/he chooses to, and everyone pays only once, for services they want. This will help improve competition among providers, which should always be the goal. If my ISP would start to record the volume of HTTP traffic I produce for each target site, charge the targets appropriately (they could start with a couple at first), and I'd get free connectivity in turn, I'd be quite happy. The ISP wouldn't have to look at the contents at all for that. I don't yet know what to do if the target sites choose not to pay up. ISPs could block them, or throttle or deprioritise traffic, but either of those might simply lead to an exodus of users, just like premiums would. As usual, this just needs to be done by many ISPs in concert. Are you listening?

Martin F. Krafft: Making money off ethics

The coffee place around the corner from where Penny and I lived for the past two months, Caff Mode, offers to make your food using free-range eggs for NZ$1. Free-range eggs are more expensive than normal ones, but the price difference is not one dollar. Therefore, the cafe makes a profit every time a customer makes the right choice. I went in this morning to ask them about it, and the guy taking my coffee order admitted stalemate. When I suggested that the cafe should use free-range eggs exclusively, he agreed. Let's hope that he lets those making that decision know, and that the cafe soon stops making money on ethical choices.
